

Dealing with Sparse Datasets in Machine Learning

#artificialintelligence

This article was published as a part of the Data Science Blogathon. Sparse data is different from missing data: missing data contains null values where observations were never recorded, whereas sparse data does contain recorded feature values, but a high proportion of them are zero. Sparse datasets with many zero values can cause problems such as over-fitting in machine learning models, which is why dealing with sparse data is one of the most demanding tasks in machine learning.
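A minimal sketch of what "sparse" means in practice, on a made-up feature matrix (the shapes and the 95% threshold are illustrative, not from the article). A compressed sparse format such as SciPy's CSR stores only the non-zero entries, sidestepping the memory cost of all the zeros:

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
dense = rng.random((1000, 50))
dense[dense < 0.95] = 0.0            # zero out ~95% of entries: a sparse dataset

# Fraction of entries that are zero.
sparsity = 1.0 - np.count_nonzero(dense) / dense.size

sparse = csr_matrix(dense)           # compressed sparse row: stores non-zeros only
print(f"sparsity: {sparsity:.2f}, stored values: {sparse.nnz}")
```

Many scikit-learn estimators accept such matrices directly, so the conversion is often all that is needed to work with very wide, mostly-zero feature tables.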


Image Quantization with K-Means

#artificialintelligence

Quantization refers to a technique where we express a range of values by a single quantum value. For images, this means that we can compress an entire color range into one specific color. This technique is lossy, i.e., we intentionally lose information in favor of lower memory consumption. In this tutorial, I will show you how to implement color quantization yourself with very few lines of code. We are going to use Python with scikit-learn, numpy, PIL, and matplotlib. Let's start by downloading the beautiful image of "Old Man of Storr" taken by Pascal van Soest, which we will work with (if you are on Windows or do not have access to wget, simply download the image and save it as image.jpeg):
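The core of the technique can be sketched in a few lines with the libraries the article names. This version clusters the pixels of a synthetic random image rather than the downloaded photo, so it runs standalone; the palette size k and the image dimensions are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image

k = 8                                          # size of the reduced color palette
pixels = image.reshape(-1, 3).astype(float)    # one RGB row per pixel
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Replace every pixel by the center of its cluster.
palette = km.cluster_centers_.astype(np.uint8)
quantized = palette[km.labels_].reshape(image.shape)

n_colors = len(np.unique(quantized.reshape(-1, 3), axis=0))
print(n_colors)                                # at most k distinct colors remain
```

The loss of information is exactly the within-cluster distance that K-Means minimizes, which is why the quantized image stays visually close to the original for moderate k.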


AI Might Not Be Your Best Source for Advice Just Yet

#artificialintelligence

Virtual assistants are wonderful at following your commands but absolutely terrible at giving life advice. Tidio editor Kazimierz Rajnerowicz spent over 30 hours asking half a dozen popular artificial intelligence (AI)-powered voice assistants and chatbots all kinds of questions, and concluded that while virtual assistants are great at retrieving facts, they aren't advanced enough to hold a conversation. "AI today is pattern recognition," explained Liziana Carter, founder of conversational AI start-up Grow AI, to Lifewire over email. "Expecting it to advise whether robbing a bank is right or wrong is expecting creative thinking from it, also known as AI General Intelligence, which we're far from right now." Rajnerowicz conceived of the experiment in response to forecasts by Juniper Research predicting that the number of AI voice assistant devices in use will exceed the human population by 2024. "... a better approach may be to use that power to gain back time to spend on the things that make us unique as humans."


A (Much) Better Approach to Evaluate Your Machine Learning Model - KDnuggets

#artificialintelligence

It's crazy how difficult it can be for Data Scientists like myself to properly evaluate ML models using classic performance metrics. Even with access to multiple metrics and scoring methods, it is still challenging to identify the right metrics for the problems I -- and likely many others -- am facing. This is exactly why I use Snitch AI for most of my ML model quality evaluation. PS -- I've been an active member of the team developing Snitch AI for the past 2 years. Machine Learning Model Validation Tool Snitch AI: Empower your Data Science team to deliver robust, trustworthy AI.


Rock Containerized GPU Machine Learning Development With VS Code - AI Summary

#artificialintelligence

Running machine learning algorithms on GPUs is a common practice. Although there are cloud ML services like Paperspace and Colab, the most convenient and flexible way to prototype is still a local machine. Since the beginning of machine learning libraries (e.g., TensorFlow, Torch, and Caffe), dealing with Nvidia libraries has been a headache for many data scientists. To summarize, setting up a GPU ML environment can easily break the existing infrastructure, and an OS reinstallation is often needed to recover. The better approach is to develop inside a CUDA-enabled container, where the development environment is isolated from the host and from other projects.
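The approach the article recommends might look like the following minimal Dockerfile sketch; the base-image tag and the packages installed are illustrative assumptions, not taken from the article:

```dockerfile
# Illustrative only: image tag and library choices are assumptions.
FROM nvidia/cuda:12.2.0-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# ML libraries live inside the container, not on the host OS.
RUN pip3 install --no-cache-dir torch

WORKDIR /workspace
```

With the NVIDIA Container Toolkit installed on the host, the container is started with GPU access via `docker run --gpus all -v "$PWD":/workspace -it <image>`, and VS Code's Dev Containers support can attach directly to it. Nothing CUDA-related touches the host beyond the driver.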


Digital Transformation: Peter Stone on "A Better Approach to Artificial Intelligence"

#artificialintelligence

Peter is David Bruton, Jr. Centennial Professor, University Distinguished Teaching Professor, and Associate Chair of the Department of Computer Science at The University of Texas at Austin. He is the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, as well as chair of the University's Robotics Portfolio Program.


In Machine Learning, What is Better: More Data or better Algorithms

@machinelearnbot

"In machine learning, is more data always better than better algorithms?" No. There are times when more data helps, and there are times when it doesn't. Probably one of the most famous quotes defending the power of data is that of Google's Research Director Peter Norvig, claiming that "We don't have better algorithms. We just have more data." This quote is usually linked to the article "The Unreasonable Effectiveness of Data", co-authored by Norvig himself (you should be able to find the PDF on the web, although the original is behind the IEEE paywall).



How to train an autoencoder? • /r/MachineLearning

@machinelearnbot

My data is 64 x 64 images. I have tried the obvious approach of training the whole autoencoder end-to-end, but I could not get good reproductions as I decreased the feature vector length.
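A minimal sketch of the setup being described (details mine, not from the thread): a linear single-hidden-layer autoencoder on flattened 64x64 inputs, trained end-to-end by plain gradient descent in NumPy. The batch size, bottleneck width, learning rate, and synthetic data are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 64, 64 * 64, 32                      # batch, flattened 64x64 input, bottleneck
X = rng.normal(size=(n, d))                    # stand-in for a batch of images

W_enc = rng.normal(size=(d, k)) / np.sqrt(d)   # encoder weights
W_dec = rng.normal(size=(k, d)) / np.sqrt(k)   # decoder weights
lr = 0.05

def loss():
    R = (X @ W_enc) @ W_dec                    # encode, then decode
    return ((R - X) ** 2).mean()               # per-element reconstruction error

first = loss()
for _ in range(100):
    Z = X @ W_enc                              # n x k codes
    G = 2.0 * (Z @ W_dec - X) / X.size         # dLoss / dReconstruction
    W_enc -= lr * X.T @ (G @ W_dec.T)          # backprop through the decoder
    W_dec -= lr * Z.T @ G
final = loss()
print(first, final)
```

Reconstruction quality inevitably degrades as k shrinks, since a k-dimensional code can at best retain the top k principal directions of the data; the question in the thread is essentially where that trade-off becomes unacceptable.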


Committee of Intelligent Machines -- Unity in Diversity of #NeuralNetworks – Autonomous Agents -- #AI

#artificialintelligence

Have you noticed that the best fitness function most creatures adopt for survival is to work in collectives? Schools of fish, hives of bees, nests of ants, herds of wildebeest, and flocks of birds all have something in common. What is even more perplexing about nature is the ecological inter-dependence of different species, collectively surviving to see a better day. This fitness function is a sum of averages of sorts, which enables a different form of collective strength. It's called Unity in Diversity.
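The same "strength in collectives" idea underlies committees (ensembles) of models: several diverse, individually weak predictors voting together beat any one of them. A minimal sketch with simulated voters (the accuracy and committee size are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_voters, p_correct = 10_000, 15, 0.6

truth = rng.integers(0, 2, size=n_samples)     # ground-truth binary labels
# Each voter is right 60% of the time, independently of the others.
votes = np.where(rng.random((n_voters, n_samples)) < p_correct,
                 truth, 1 - truth)

single_acc = (votes[0] == truth).mean()        # one weak voter alone
majority = (votes.sum(axis=0) > n_voters / 2).astype(int)
committee_acc = (majority == truth).mean()     # the committee's majority vote
print(single_acc, committee_acc)
```

The gain depends entirely on the voters erring independently, i.e., on their diversity; a committee of identical models votes the same way every time and gains nothing, which is the "unity in diversity" point.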